Incorporating external knowledge into the response generation process is essential to building more helpful and reliable dialog agents. However, collecting knowledge-grounded conversations is often costly, calling for a better pre-trained model for grounded dialog generation that generalizes well across different types of knowledge. In this work, we propose KPT (Keyword-guided Pre-Training), a novel self-supervised pre-training method for grounded dialog generation without relying on extra knowledge annotation. Specifically, we use a pre-trained language model to extract the most uncertain tokens in the dialog as keywords. With these keywords, we construct two kinds of knowledge and pre-train a knowledge-grounded response generation model, aiming to handle two different scenarios: (1) the knowledge should be faithfully grounded; (2) it can be selectively used. For the former, the grounding knowledge consists of keywords extracted from the response. For the latter, the grounding knowledge is additionally augmented with keywords extracted from other utterances in the same dialog. Since the knowledge is extracted from the dialog itself, KPT can be easily applied to a large volume and variety of dialog data. We consider three data sources (open-domain, task-oriented, conversational QA) with a total of 2.5M dialogs. We conduct extensive experiments on various few-shot knowledge-grounded generation tasks, including grounding on dialog acts, knowledge graphs, persona descriptions, and Wikipedia passages. Our comprehensive experiments and analyses demonstrate that KPT consistently outperforms state-of-the-art methods on these tasks with diverse grounding knowledge.
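As a rough illustration of the keyword-extraction step, the sketch below scores each token of an utterance by its negative log-likelihood under a frozen GPT-2 and keeps the highest-scoring ones as the "most uncertain" keywords. The choice of GPT-2, the NLL criterion, and the top-k cutoff are assumptions for illustration; the abstract does not specify KPT's exact uncertainty measure.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Sketch only: per-token NLL under a frozen LM is one plausible proxy for
# "most uncertain tokens"; KPT's actual criterion may differ.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def extract_keywords(utterance: str, top_k: int = 5):
    enc = tokenizer(utterance, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits                       # (1, T, vocab)
    # Shift so that token t is scored by the prediction made at position t-1.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    target_ids = enc["input_ids"][:, 1:]
    token_nll = -log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)[0]
    tokens = [tokenizer.decode([tid]) for tid in target_ids[0].tolist()]
    # Highest-NLL tokens are the ones the LM is least certain about.
    ranked = sorted(zip(tokens, token_nll.tolist()), key=lambda x: -x[1])
    return [tok for tok, _ in ranked[:top_k]]

print(extract_keywords("I booked a table at the Italian place near the station."))
```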
Autoregressive language modeling (ALM) has been used successfully for self-supervised pre-training in natural language processing (NLP). However, this paradigm has not achieved results comparable to other self-supervised approaches in computer vision (e.g., contrastive learning, masked image modeling). In this paper, we investigate why autoregressive modeling does not work well on vision tasks. We analyze the limitations of visual autoregressive methods and propose a novel stochastic autoregressive image modeling method (named SAIM) built on two simple designs. First, we employ a stochastic permutation strategy to generate effective and robust image context, which is critical for vision tasks. Second, we create a parallel encoder-decoder training process in which the encoder plays a role similar to a standard vision transformer and focuses on learning the whole contextual information, while the decoder predicts the content of the current position, so that the encoder and decoder can reinforce each other. By introducing stochastic prediction and the parallel encoder-decoder, SAIM significantly improves the performance of autoregressive image modeling. Our method achieves the best accuracy (83.9%) on the vanilla ViT-Base model among methods using only ImageNet-1K data. Transfer performance on downstream tasks also shows that our model achieves competitive performance.
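A small sketch of the stochastic-permutation idea: sample a random prediction order over image patches and derive the attention mask that lets each patch attend only to the patches preceding it in that order. How SAIM actually consumes the permutation inside its parallel encoder-decoder is not described in the abstract, so the masking below is only an illustrative reading.

```python
import torch

def permuted_ar_mask(num_patches: int):
    """Sample a random autoregressive factorization order over patches and
    build the corresponding attention mask (illustrative assumption)."""
    order = torch.randperm(num_patches)          # order[k] = patch predicted at step k
    rank = torch.empty_like(order)
    rank[order] = torch.arange(num_patches)      # rank[p] = step at which patch p is predicted
    # mask[i, j] is True iff patch j is predicted strictly before patch i,
    # i.e. position i may attend to position j.
    mask = rank.unsqueeze(1) > rank.unsqueeze(0)
    return order, mask

order, mask = permuted_ar_mask(9)  # e.g. a 3x3 patch grid
print(order)
print(mask.int())
```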
Diverse data formats and ontologies of task-oriented dialogue (TOD) datasets hinder us from developing general dialogue models that perform well on many datasets and from studying knowledge transfer between datasets. To address this issue, we present ConvLab-3, a flexible dialogue system toolkit based on a unified TOD data format. In ConvLab-3, different datasets are transformed into one unified format and loaded by models in the same way. As a result, the cost of adapting a new model or dataset is significantly reduced. Compared to the previous releases of ConvLab (Lee et al., 2019b; Zhu et al., 2020b), ConvLab-3 allows developing dialogue systems with many more datasets and enhances the utility of the reinforcement learning (RL) toolkit for dialogue policies. To showcase the use of ConvLab-3 and inspire future work, we present a comprehensive study with various settings. We show the benefit of pre-training on other datasets for few-shot fine-tuning and RL, and encourage evaluating policies with diverse user simulators.
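To make the idea of a unified format concrete, the snippet below shows a hypothetical per-turn record in which every dataset is mapped to the same fields. The field names are purely illustrative and are not ConvLab-3's actual schema.

```python
# Hypothetical unified-format dialogue turn (illustrative field names only,
# not ConvLab-3's actual schema): every source dataset is mapped to the same
# structure, so models can load any of them in the same way.
unified_turn = {
    "dataset": "multiwoz21",
    "speaker": "user",
    "utterance": "I need a cheap hotel in the city centre.",
    "dialogue_acts": [
        {"intent": "inform", "domain": "hotel", "slot": "pricerange", "value": "cheap"},
        {"intent": "inform", "domain": "hotel", "slot": "area", "value": "centre"},
    ],
    "state": {"hotel": {"pricerange": "cheap", "area": "centre"}},
}

# A model adapted to this format can iterate over any converted dataset identically.
for act in unified_turn["dialogue_acts"]:
    print(act["intent"], act["domain"], act["slot"], act["value"])
```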
This paper studies the problem of stochastic continuum-armed bandit with constraints (SCBwC), where we optimize a black-box reward function $f(x)$ subject to a black-box constraint function $g(x)\leq 0$ over a continuous space $\mathcal X$. We model reward and constraint functions via Gaussian processes (GPs) and propose a Rectified Pessimistic-Optimistic Learning framework (RPOL), a penalty-based method incorporating optimistic and pessimistic GP bandit learning for the reward and constraint functions, respectively. We consider the metric of cumulative constraint violation $\sum_{t=1}^T(g(x_t))^{+},$ which is strictly stronger than the traditional long-term constraint violation $\sum_{t=1}^Tg(x_t).$ The rectified design of the penalty update and the pessimistic learning for the constraint function in RPOL guarantee that the cumulative constraint violation is minimal. RPOL achieves sublinear regret and cumulative constraint violation for SCBwC and its variants (e.g., under delayed feedback and non-stationary environments). These theoretical results match their unconstrained counterparts. Our experiments show that RPOL outperforms several existing baseline algorithms.
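The gap between the two violation metrics is easy to see on a toy trajectory: strictly feasible rounds can cancel violations in the long-term metric but not in the rectified one.

```latex
% Toy trajectory with T = 2: a violation of 1 followed by a strictly feasible round,
% g(x_1) = 1 and g(x_2) = -1.
\[
\sum_{t=1}^{2} g(x_t) = 1 + (-1) = 0,
\qquad
\sum_{t=1}^{2} \big(g(x_t)\big)^{+} = \max\{1,0\} + \max\{-1,0\} = 1 .
\]
% Since (g(x_t))^{+} = \max\{g(x_t), 0\} \ge g(x_t) for every t, the cumulative
% violation upper-bounds the long-term violation on every trajectory, which is
% why it is the strictly stronger metric.
```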
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for the automatic enhancement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 500 FPS and a power consumption of 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem and task the participants with designing an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
Fight detection in videos is an emerging deep learning application, driven by the prevalence of surveillance systems and streaming media today. Previous work has mainly relied on action recognition techniques to tackle this problem. In this paper, we propose a simple but effective method that solves the task from a new perspective: we design the fight detection model as a composition of an action-aware feature extractor and an anomaly score generator. In addition, considering that collecting frame-level labels for videos is too laborious, we design a weakly supervised two-stage training scheme, in which we train the score generator with a multiple-instance learning loss computed on video-level labels and adopt a self-training technique to further improve its performance. Extensive experiments on a publicly available large-scale dataset, UBI-Fights, demonstrate the effectiveness of our method, whose performance on the dataset exceeds several previous state-of-the-art approaches. Furthermore, we collect a new dataset, VFD-2000, that specializes in video fight detection and is larger than existing datasets with a greater variety of scenes. The implementation of our method and the proposed dataset will be publicly available at https://github.com/hepta-col/videofightdetection.
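A minimal sketch of the weakly supervised ingredient: a multiple-instance learning loss that supervises snippet-level anomaly scores with only video-level labels. The top-k aggregation and the batch layout below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def mil_video_loss(snippet_scores: torch.Tensor, video_label: torch.Tensor, k: int = 3):
    """Sketch of a multiple-instance learning loss for video-level labels.

    snippet_scores: (B, N) anomaly scores in [0, 1] for N snippets per video.
    video_label:    (B,) 1 for fight videos, 0 for normal videos.
    The video-level score is the mean of the top-k snippet scores; the exact
    aggregation used in the paper may differ, this is only an illustrative choice.
    """
    video_score = snippet_scores.topk(k, dim=1).values.mean(dim=1)
    return F.binary_cross_entropy(video_score, video_label.float())

# Example: two videos, eight snippets each.
scores = torch.rand(2, 8)
labels = torch.tensor([1, 0])
print(mil_video_loss(scores, labels))
```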
Adversarial training (AT) is usually regarded as one of the most effective defenses against adversarial examples, but it may largely harm standard performance, thus limiting its usefulness in industrial-scale production and applications. Surprisingly, this phenomenon is completely the opposite in natural language processing (NLP) tasks, where AT can even benefit generalization. We notice that the merit of AT in NLP tasks may come from the discrete and symbolic input space. To borrow this advantage from NLP-style AT, we propose Discrete Adversarial Training (DAT). DAT leverages VQGAN to reform image data into discrete, text-like inputs, i.e., visual words. It then minimizes the maximal risk on such discrete images with symbolic adversarial perturbations. We further provide an explanation from a distributional perspective to demonstrate the effectiveness of DAT. As a plug-and-play technique for enhancing visual representations, DAT achieves significant improvements on multiple tasks, including image classification, object detection, and self-supervised learning. Notably, a model pre-trained with masked auto-encoding (MAE) and fine-tuned with our DAT, without any extra data, obtains 31.40 mCE on ImageNet-C and 32.77% top-1 accuracy on Stylized-ImageNet, establishing a new state of the art. The code will be available at https://github.com/alibaba/easyrobust.
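The sketch below illustrates the discrete, symbolic nature of the perturbation with toy stand-ins: patch features are quantized against a small codebook of "visual words", a fraction of the resulting indices is flipped, and the classifier is trained on the perturbed discrete input. The real DAT uses a pre-trained VQGAN and searches for worst-case perturbations rather than flipping indices at random; everything below is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins (the real method uses a pre-trained VQGAN and a ViT backbone).
codebook = nn.Embedding(512, 16)                       # 512 "visual words", 16-dim codes
classifier = nn.Sequential(nn.Flatten(), nn.Linear(16 * 49, 10))

def quantize(features: torch.Tensor) -> torch.Tensor:
    # features: (B, 49, 16) patch features -> (B, 49) discrete visual-word indices
    dists = ((features.unsqueeze(2) - codebook.weight) ** 2).sum(-1)
    return dists.argmin(dim=-1)

def dat_loss(features, labels, flip_frac=0.1):
    """Train on a symbolically perturbed discrete input: re-assign a fraction of
    the visual words to random codebook indices (random flipping stands in for
    the worst-case search used by DAT)."""
    idx = quantize(features)
    flip = torch.rand(idx.shape) < flip_frac
    adv_idx = torch.where(flip, torch.randint_like(idx, 512), idx)
    logits = classifier(codebook(adv_idx))
    return F.cross_entropy(logits, labels)

feats = torch.randn(4, 49, 16)
labels = torch.randint(0, 10, (4,))
print(dat_loss(feats, labels))
```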
Multi-hop question answering (QA) requires reasoning over multiple documents to answer a complex question and provide interpretable supporting evidence. However, providing supporting evidence is not enough to demonstrate that a model has performed the required reasoning to reach the correct answer. Most existing multi-hop QA methods also fail to answer a large fraction of sub-questions, even when their parent questions are answered correctly. In this paper, we propose the Prompt-based Conservation Learning (PCL) framework for multi-hop QA, which acquires new knowledge from multi-hop QA tasks while preserving the old knowledge learned on single-hop QA tasks, thereby mitigating forgetting. Specifically, we first train a model on existing single-hop QA tasks, then freeze this model and expand it by allocating additional sub-networks for the multi-hop QA task. Moreover, to condition the pre-trained language model to stimulate the type of reasoning required by a specific multi-hop question, we learn soft prompts for the new sub-networks to perform type-specific reasoning. Experimental results on the HotpotQA benchmark show that PCL is competitive on multi-hop QA and retains good performance on the corresponding single-hop sub-questions, demonstrating the efficacy of PCL in mitigating knowledge loss due to forgetting.
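A toy sketch of the freeze-and-extend recipe: the single-hop backbone is frozen, and only a soft prompt plus a small sub-network are trained for the multi-hop task. The module sizes, the residual wiring, and the stand-in encoder are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PCLSketch(nn.Module):
    """Freeze the single-hop backbone; train only a soft prompt and a small
    sub-network for the multi-hop task (illustrative wiring and sizes)."""
    def __init__(self, backbone: nn.Module, hidden: int = 768, prompt_len: int = 20):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():        # preserve single-hop knowledge
            p.requires_grad = False
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        self.subnet = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(),
                                    nn.Linear(hidden, hidden))

    def forward(self, token_embeds: torch.Tensor):
        # token_embeds: (B, T, hidden); prepend the learned prompt vectors.
        prompt = self.soft_prompt.expand(token_embeds.size(0), -1, -1)
        x = torch.cat([prompt, token_embeds], dim=1)
        h = self.backbone(x)                        # frozen encoder
        return h + self.subnet(h)                   # new capacity for multi-hop reasoning

# Toy usage with a stand-in frozen "encoder".
enc = nn.Sequential(nn.Linear(768, 768), nn.Tanh())
model = PCLSketch(enc)
print(model(torch.randn(2, 16, 768)).shape)         # torch.Size([2, 36, 768])
```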
Millimeter-wave (mmWave) radar operates in adverse environments such as smoke, rain, snow, and poor lighting. Prior work has explored the possibility of reconstructing 3D skeletons or meshes from noisy and sparse mmWave radar signals. However, it is still unclear how accurately 3D bodies can be reconstructed from mmWave signals across scenes and how the performance compares to cameras, which is an important aspect to consider when deciding whether to use mmWave radar alone or to combine it with cameras. To answer these questions, multiple sensors are first designed and built to collect a large-scale dataset. The dataset consists of synchronized and calibrated mmWave radar point clouds and RGB(D) images in different scenes, together with skeleton/mesh annotations of the humans in the scenes. Using this dataset, we train state-of-the-art methods with inputs from different sensors and test them in various scenarios. The results show that 1) despite the noise and sparsity of the generated point clouds, mmWave radar achieves better reconstruction accuracy than an RGB camera but worse than a depth camera; 2) reconstruction from mmWave radar is affected by adverse weather conditions, while RGB(D) cameras are severely affected. Furthermore, analysis of the dataset and results provides insights into improving reconstruction from mmWave radar and into combining signals from different sensors.